Learning to Coordinate Efficiently: A Model-based Approach

Brafman, R. I., Tennenholtz, M.

arXiv.org Artificial Intelligence

Players participating in such games must learn to coordinate with each other in order to receive the highest-possible value. A number of reinforcement learning algorithms have been proposed for this problem, and some have been shown to converge to good solutions in the limit. In this paper we show that using very simple model-based algorithms, much better (i.e., polynomial) convergence rates can be attained. Moreover, our model-based algorithms are guaranteed to converge to the optimal value, unlike many of the existing algorithms. The distributed nature of such systems makes the problem of learning to act in an unknown environment more difficult because the agents must coordinate both their learning process and their action choices. However, the need to coordinate is not restricted to distributed agents, as it arises naturally among self-interested agents in certain environments. A good model for such environments is that of a common-interest stochastic game (CISG). A stochastic game (Shapley, 1953) is a model of multi-agent interactions consisting of multiple finite or infinite stages, in each of which the agents play a one-shot strategic-form game. The identity of each stage depends stochastically on the previous stage and the actions performed by the agents in that stage. The goal of each agent is to maximize some function of its reward stream - either its average reward or its sum of discounted rewards. A CISG is a stochastic game in which at each point the payoff of all agents is identical. Various algorithms for learning in CISGs have been proposed in the literature.
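The CISG definition above can be sketched concretely. The following minimal example (sizes, reward ranges, and the random instance are illustrative assumptions, not taken from the paper) builds a tiny two-agent CISG in which both agents receive an identical payoff, and computes the optimal common value via centralized value iteration on the joint-action MDP - the value the agents' coordinated learning should approach:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CISG sizes for illustration.
n_states, n_actions = 3, 2  # actions per agent

# Common-interest payoff: both agents receive R[s, a1, a2].
R = rng.uniform(0, 1, size=(n_states, n_actions, n_actions))
# Transition kernel: P[s, a1, a2] is a distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions, n_actions))

def step(s, a1, a2):
    """Play one stage game: both agents get the same reward."""
    r = R[s, a1, a2]
    s_next = rng.choice(n_states, p=P[s, a1, a2])
    return r, s_next

# Because payoffs are identical, the joint-action problem is a single-agent
# MDP; value iteration on it yields the optimal common discounted value.
gamma = 0.9
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * (P @ V)               # Q[s, a1, a2]
    V = Q.reshape(n_states, -1).max(axis=1)
```

The coordination difficulty the paper addresses is precisely that no single agent controls the joint action: each agent must learn both the model and which joint action its partners will select.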


The Communicative Multiagent Team Decision Problem: Analyzing Teamwork Theories and Models

Pynadath, D. V., Tambe, M.

arXiv.org Artificial Intelligence

Despite the significant progress in multiagent teamwork, existing research does not address the optimality of its prescriptions nor the complexity of the teamwork problem. Without a characterization of the optimality-complexity tradeoffs, it is impossible to determine whether the assumptions and approximations made by a particular theory gain enough efficiency to justify the losses in overall performance. To provide a tool for use by multiagent researchers in evaluating this tradeoff, we present a unified framework, the COMmunicative Multiagent Team Decision Problem (COM-MTDP). The COM-MTDP model combines and extends existing multiagent theories, such as decentralized partially observable Markov decision processes and economic team theory. In addition to their generality of representation, COM-MTDPs also support the analysis of both the optimality of team performance and the computational complexity of the agents' decision problem. In analyzing complexity, we present a breakdown of the computational complexity of constructing optimal teams under various classes of problem domains, along the dimensions of observability and communication cost. In analyzing optimality, we exploit the COM-MTDP's ability to encode existing teamwork theories and models, encoding two instantiations of joint intentions theory taken from the literature. Furthermore, the COM-MTDP model provides a basis for the development of novel team coordination algorithms. We derive a domain-independent criterion for optimal communication and provide a comparative analysis of the two joint intentions instantiations with respect to this optimal policy. We have implemented a reusable, domain-independent software package based on COM-MTDPs to analyze teamwork coordination strategies, and we demonstrate its use by encoding and evaluating the two joint intentions strategies within an example domain.
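The ingredients of such a model can be sketched as a data structure. The encoding below is a hypothetical illustration, not the paper's notation: it extends a decentralized decision problem with explicit communication acts and a communication cost, so that the shared team payoff nets out the cost of messages - the quantity an optimality-complexity analysis would trade off:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Illustrative COM-MTDP-style tuple; all field names are assumptions.
@dataclass
class ComMTDP:
    states: List[str]
    agent_actions: Dict[str, List[str]]     # domain-level actions per agent
    agent_messages: Dict[str, List[str]]    # communication acts per agent
    transition: Callable[[str, Tuple[str, ...]], str]
    observe: Callable[[str, str], str]      # (agent, state) -> observation
    reward: Callable[[str, Tuple[str, ...]], float]  # shared team reward
    comm_cost: Callable[[Tuple[str, ...]], float]    # cost of joint messages

    def team_reward(self, state, joint_action, joint_messages):
        # Team payoff net of communication cost: communicating is only
        # worthwhile when its coordination benefit exceeds this cost.
        return self.reward(state, joint_action) - self.comm_cost(joint_messages)

# Trivial usage example with placeholder dynamics.
m = ComMTDP(
    states=["s0"],
    agent_actions={"a1": ["act"]},
    agent_messages={"a1": ["tell", "silent"]},
    transition=lambda s, ja: s,
    observe=lambda agent, s: s,
    reward=lambda s, ja: 1.0,
    comm_cost=lambda jm: 0.25 if "tell" in jm else 0.0,
)
```

Varying the observability assumptions and the `comm_cost` function is, roughly, how the complexity breakdown along those two dimensions would be instantiated in such a sketch.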


A Convergent Online Single Time Scale Actor Critic Algorithm

Di Castro, D., Meir, R.

arXiv.org Artificial Intelligence

Actor-Critic based approaches were among the first to address reinforcement learning in a general setting. Recently, these algorithms have gained renewed interest due to their generality, good convergence properties, and possible biological relevance. In this paper, we introduce an online temporal difference based actor-critic algorithm which is proved to converge to a neighborhood of a local maximum of the average reward. Linear function approximation is used by the critic in order to estimate the value function and the temporal difference signal, which is passed from the critic to the actor. The main distinguishing feature of the present convergence proof is that both the actor and the critic operate on a similar time scale, while in most current convergence proofs they are required to have very different time scales in order to converge. Moreover, the same temporal difference signal is used to update the parameters of both the actor and the critic. A limitation of the proposed approach, compared to results available for two time scale convergence, is that convergence is guaranteed only to a neighborhood of an optimal value, rather than to an optimal value itself. The single time scale and identical temporal difference signal used by the actor and the critic may provide a step towards constructing more biologically realistic models of reinforcement learning in the brain.
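The structure described above - one step size for both modules, and one shared TD signal - can be sketched as follows. This is a minimal illustrative sketch on an assumed toy MDP with one-hot features and a softmax actor, not the paper's algorithm or its proof conditions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy MDP for illustration.
n_states, n_actions = 2, 2
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0, 1, size=(n_states, n_actions))

def features(s):
    """One-hot state features for the linear critic."""
    phi = np.zeros(n_states)
    phi[s] = 1.0
    return phi

theta = np.zeros((n_states, n_actions))  # actor (softmax policy) parameters
w = np.zeros(n_states)                   # critic (linear value) weights
avg_r = 0.0                              # running average-reward estimate
alpha = 0.05                             # single time scale: one step size

s = 0
for _ in range(20000):
    # Softmax policy over the actor's preferences.
    prefs = theta[s] - theta[s].max()
    pi = np.exp(prefs) / np.exp(prefs).sum()
    a = rng.choice(n_actions, p=pi)
    r = R[s, a]
    s_next = rng.choice(n_states, p=P[s, a])

    # Average-reward TD error: the one signal shared by actor and critic.
    delta = r - avg_r + features(s_next) @ w - features(s) @ w
    avg_r += alpha * (r - avg_r)
    w += alpha * delta * features(s)         # critic update
    grad_log = -pi                           # d log pi(a|s) / d theta[s]
    grad_log[a] += 1.0
    theta[s] += alpha * delta * grad_log     # actor update, same delta
    s = s_next
```

Note both updates use the same `alpha` and the same `delta`; two-time-scale proofs would instead shrink the actor's step size much faster than the critic's.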